Underwater images inevitably suffer from color casts and reduced contrast. Statistics-based methods, such as white balance and histogram stretching, attempt to correct the imbalance among color channels and the narrow intensity distribution, and therefore achieve only limited performance. Recently, deep-learning-based methods have obtained encouraging results, but their complicated architectures and high computational costs may hinder deployment on practical, resource-constrained platforms. Inspired by the above works, we propose a statistically guided lightweight underwater image enhancement network (USLN). Specifically, we first develop a dual-statistic white balance module that learns to use both the average image and the maximum image to compensate the color distortion of each individual pixel. This is followed by a multi-color-space stretch module that adjusts the histogram distributions in the RGB, HSI, and Lab color spaces adaptively. Extensive experiments show that, with the guidance of statistics, USLN significantly reduces the required network capacity (by over 98%) while achieving state-of-the-art performance. The code and related resources are available at https://github.com/deepxzy/usln.
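The dual-statistic idea lends itself to a compact PyTorch sketch. The module below corrects an image with a gray-world term (derived from the average image) and a white-patch term (derived from the maximum image), then learns to fuse the two; the 1x1-conv fusion and all names are our assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class DualStatisticWhiteBalance(nn.Module):
    """Minimal sketch: compensate per-pixel color cast using the
    per-channel mean image (gray-world statistic) and max image
    (white-patch statistic). The 1x1-conv fusion is an assumption."""

    def __init__(self):
        super().__init__()
        # Learn how to blend the two statistic-corrected images per pixel.
        self.fuse = nn.Conv2d(6, 3, kernel_size=1)

    def forward(self, x):                       # x: (B, 3, H, W) in [0, 1]
        mean = x.mean(dim=(2, 3), keepdim=True)            # gray-world
        mx = x.amax(dim=(2, 3), keepdim=True)              # white-patch
        eps = 1e-6
        x_mean = x * (mean.mean(dim=1, keepdim=True) / (mean + eps))
        x_max = x / (mx + eps)
        return self.fuse(torch.cat([x_mean, x_max], dim=1))

wb = DualStatisticWhiteBalance()
out = wb(torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```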
Multi-scenario recommendation, which retrieves relevant items for users across multiple scenarios, is ubiquitous in industrial recommender systems. These scenarios share a portion of users and items, while the distributions of different scenarios differ. The key to multi-scenario modeling is to exploit whole-scenario information effectively and to generate adaptive representations of users and items across scenarios. We summarize three practical challenges that existing multi-scenario modeling fails to address well: (1) the lack of fine-grained and decoupled control of information transfer across scenarios; (2) the insufficient exploitation of entire-space samples; (3) the disentanglement problem of items' multi-scenario representations. In this paper, we propose a Scenario-Adaptive and Self-Supervised (SASS) model to solve the three challenges above. Specifically, we design a Multi-Layer Scenario-Adaptive Transfer (ML-SAT) module with scenario-adaptive gate units to select and fuse the information transferred from the whole scenario to an individual scenario in a fine-grained and decoupled manner. To fully exploit entire-space samples, a two-stage training process including pre-training and fine-tuning is introduced. The pre-training stage is based on a scenario-supervised contrastive learning task, with training samples drawn from both the labeled and unlabeled data spaces. The model is built symmetrically on the user side and the item side, so that we can obtain distinguishing representations of items in different scenarios. Extensive experimental results on public and industrial datasets demonstrate the superiority of the SASS model over state-of-the-art methods. The model also improves the average watching time per user by more than 8.0% in online A/B tests.
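As an illustration of the gate idea (not the paper's exact architecture), the following sketch conditions a sigmoid gate on a scenario embedding to decide, per dimension, how much of the globally shared representation is transferred into a scenario-specific one; all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class ScenarioAdaptiveGate(nn.Module):
    """Sketch of one gate unit: a scenario embedding produces a sigmoid
    gate that decides, per dimension, how much of the globally shared
    representation flows into the scenario-specific one. The exact
    gating form is our assumption."""

    def __init__(self, dim, num_scenarios):
        super().__init__()
        self.scenario_emb = nn.Embedding(num_scenarios, dim)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, h_global, h_scenario, scenario_id):
        s = self.scenario_emb(scenario_id)                 # (B, dim)
        g = self.gate(torch.cat([h_global, s], dim=-1))    # (B, dim)
        # Fine-grained, decoupled transfer: gated mix of global and local.
        return g * h_global + (1 - g) * h_scenario

layer = ScenarioAdaptiveGate(dim=32, num_scenarios=4)
out = layer(torch.randn(8, 32), torch.randn(8, 32),
            torch.randint(0, 4, (8,)))
```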
This paper demonstrates a task that adapts a BART model so that it can construct a sentence from an arbitrary set of words, which used to be a difficult NLP task. The training task is to construct a sentence from four words, but the trained model can also generate sentences when fewer or more words are provided. Overall, the output sentences are of high quality. The model has several potential real-world applications, and this task could also serve as an evaluation mechanism for any language model.
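A minimal sketch of the setup using the Hugging Face transformers API: the stock facebook/bart-base checkpoint stands in for the fine-tuned model, so with it the output will largely echo the input; after fine-tuning on (word set → sentence) pairs as described, the same call would return a fluent sentence containing the given words.

```python
# Sketch of the words-to-sentence setup. "facebook/bart-base" is the
# stock checkpoint; in practice you would first fine-tune it on
# (word set -> sentence) pairs as described in the abstract.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# One training pair: the input is the (order-agnostic) word set,
# the target is a fluent sentence containing those words.
words = ["dog", "park", "morning", "run"]
inputs = tokenizer(" ".join(words), return_tensors="pt")

output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```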
In this work, we propose a permutation-invariant language model, SymphonyNet, as a solution for symbolic symphony music generation. We propose a novel Multi-track Multi-instrument Repeatable (MMR) representation with a particular 3-D positional embedding and model the music sequence with a Transformer-based auto-regressive language model. To overcome length overflow when modeling extremely long symphony token sequences, we also propose a modified Byte Pair Encoding algorithm (Music BPE) for music tokens and introduce a novel linear transformer decoder architecture as the backbone. Meanwhile, we train the decoder to learn automatic orchestration as a joint task by masking instrument information from the input. We also introduce a large-scale symbolic symphony dataset to advance research on symphony generation. Empirical results show that the proposed approach can generate coherent, novel, complex, and harmonious symphonies, as a pioneer solution for multi-track multi-instrument symbolic music generation.
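For intuition, the snippet below implements the plain BPE merge loop over a token sequence; Music BPE adapts this idea to music tokens (for instance, compressing notes that sound together), so treat this only as a generic illustration, not the paper's algorithm.

```python
from collections import Counter

def bpe_merge(tokens, num_merges):
    """Plain BPE: repeatedly merge the most frequent adjacent token
    pair into a new token. Music BPE specializes this merge rule for
    music tokens; this version is only a generic illustration."""
    merges = []
    tokens = list(tokens)
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + "+" + b)   # fuse the pair into one token
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

seq = ["C4", "E4", "G4", "C4", "E4", "G4", "A4"]
print(bpe_merge(seq, num_merges=2))
```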
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, in which entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at this task, covering both methods and applications. Specifically, we first introduce the challenges of FKGC and the commonly used KGs and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
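One common way to realize global photometric alignment is to match a source image's per-channel first and second moments to target-domain statistics, as in the sketch below; the paper's module may differ in detail, and the function and parameter names are ours.

```python
import numpy as np

def photometric_align(src, tgt_mean, tgt_std, eps=1e-6):
    """Align a source image's per-channel mean and std to target-domain
    statistics -- one common realization of global photometric
    alignment; the paper's module may differ in detail.
    src: (H, W, 3) float array; tgt_mean / tgt_std: (3,) arrays."""
    src_mean = src.mean(axis=(0, 1))
    src_std = src.std(axis=(0, 1))
    return (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean

src = np.random.rand(64, 64, 3).astype(np.float32)
aligned = photometric_align(src,
                            tgt_mean=np.array([0.4, 0.45, 0.5]),
                            tgt_std=np.array([0.2, 0.2, 0.25]))
print(aligned.mean(axis=(0, 1)))  # approximately tgt_mean
```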
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a kind of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
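The core mechanism behind a Koopman neural operator can be sketched from scratch, independently of the KoopmanLab API: lift the state into a latent observable space, advance it with a learned *linear* Koopman operator, and decode back. All dimensions and names below are illustrative assumptions, not the module's actual interface.

```python
import torch
import torch.nn as nn

class TinyKNO(nn.Module):
    """From-scratch sketch of the Koopman-neural-operator idea (not the
    KoopmanLab API): encode the state into latent observables, advance
    them with a learned linear Koopman operator, decode back."""

    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, latent_dim),
                                     nn.Tanh())
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)
        self.decoder = nn.Linear(latent_dim, state_dim)

    def forward(self, u, steps=1):           # u: (B, state_dim) at time t
        z = self.encoder(u)
        for _ in range(steps):                # linear dynamics in latent space
            z = self.koopman(z)
        return self.decoder(z)                # predicted state at t + steps

model = TinyKNO(state_dim=128, latent_dim=64)
u_next = model(torch.randn(4, 128), steps=5)
```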
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
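A hedged sketch of the style-aware adaptation: project the style code to per-channel scales that modulate a feed-forward layer's hidden activations (and hence its effective weights). The exact modulation scheme in the paper may differ; all names here are ours.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Sketch of a style-aware feed-forward layer: the style code is
    projected to per-channel scales that modulate the hidden
    activations, effectively rescaling the FFN weights. The exact
    modulation used in the paper is assumed, not confirmed."""

    def __init__(self, dim, hidden, style_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)
        self.w2 = nn.Linear(hidden, dim)
        self.mod = nn.Linear(style_dim, hidden)   # style -> channel scales

    def forward(self, x, style):                  # x: (B, T, dim)
        scale = self.mod(style).unsqueeze(1)      # (B, 1, hidden)
        h = torch.relu(self.w1(x)) * (1 + scale)  # style-modulated hidden
        return self.w2(h)

ffn = StyleAdaptiveFFN(dim=256, hidden=512, style_dim=64)
y = ffn(torch.randn(2, 10, 256), torch.randn(2, 64))
```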
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet-pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch-embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer is required to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification over ImageNet-pretrained weights and state-of-the-art self-supervised learning approaches.
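The online/target objective that BOLT builds on can be sketched with a BYOL-style loss: the online branch's prediction is pulled toward the gradient-stopped target projection, while the target branch tracks the online one via an exponential moving average. Function names and the momentum value are assumptions.

```python
import torch
import torch.nn.functional as F

def byol_style_loss(online_pred, target_proj):
    """Negative cosine similarity between the online branch's prediction
    and the gradient-stopped target projection -- the core of BYOL-style
    objectives such as the one BOLT builds on."""
    online_pred = F.normalize(online_pred, dim=-1)
    target_proj = F.normalize(target_proj.detach(), dim=-1)
    return 2 - 2 * (online_pred * target_proj).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, m=0.99):
    """The target branch is an exponential moving average of the online one."""
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(m).add_(po, alpha=1 - m)

loss = byol_style_loss(torch.randn(8, 128), torch.randn(8, 128))
```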
Nearest-Neighbor (NN) classification has proven to be a simple and effective approach for few-shot learning. Query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundaries of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor-based few-shot classification method empowered by prior-driven data calibration. Inspired by distribution calibration techniques, which utilize the distributions or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to the different base prototypes. Then, we perform NN classification using the discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient, non-learning-based method can outperform, or at least be comparable to, SOTA methods that require additional learning steps.
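The calibration step can be illustrated as follows: pull each support feature toward a similarity-weighted mixture of base-class prototypes, then classify queries by nearest calibrated support. The hyperparameters alpha and tau are our assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

def calibrate_support(support, base_prototypes, alpha=0.7, tau=10.0):
    """Sketch of prior-driven calibration: move each support feature
    toward a similarity-weighted mix of base-class prototypes.
    alpha and tau are assumed hyperparameters."""
    sims = (F.normalize(support, dim=-1)
            @ F.normalize(base_prototypes, dim=-1).T)
    weights = F.softmax(tau * sims, dim=-1)       # (n_support, n_base)
    prior = weights @ base_prototypes             # similarity-weighted prior
    return alpha * support + (1 - alpha) * prior

support = torch.randn(5, 64)          # 5-way, one shot per class
bases = torch.randn(20, 64)           # base-class prototypes
query = torch.randn(15, 64)
calibrated = calibrate_support(support, bases)
logits = F.normalize(query, dim=-1) @ F.normalize(calibrated, dim=-1).T
pred = logits.argmax(dim=-1)          # nearest-neighbor prediction
```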